Reimbursement
All the 'Black Mirror' Season 7 Episodes Ranked
Every day, the world seems to slip further into dystopia, with President Donald Trump placing tariffs on islands inhabited by penguins and the country's head of Medicare and Medicaid touting AI-first health care. If you needed an even higher dose of Orwellian anxiety in your life, though, Black Mirror has finally returned for season 7 with six brand-new episodes. In its new season, the anthology series about our, shall we say, complicated relationship with technology takes on AI sentience, subscription pricing models, lost loves, high school grudges, and the privatization of health care. It's also got plenty of action, romance, and a heaping helping of tech-era terror. As with any anthology series, Black Mirror has plenty of hits and its share of misses, and season 7 is no exception, which makes it all the more perfect for ranking.
Dr. Oz Pushed for AI Health Care in First Medicare Agency Town Hall
Dr. Mehmet Oz, the new administrator for the Centers for Medicare and Medicaid Services (CMS), spent much of his first all-staff meeting on Monday promoting the use of artificial intelligence at the agency and praising Robert F. Kennedy Jr.'s "Make America Healthy Again" initiative, sources tell WIRED. During the meeting, Oz discussed possibly prioritizing AI avatars over frontline health care workers. Oz claimed that an appointment with a doctor for a diabetes diagnosis would cost $100 an hour, while an appointment with an AI avatar would cost considerably less, at just $2 an hour. Oz also claimed that patients have rated the care they've received from an AI avatar as equal to or better than that from a human doctor. Because of technologies like machine learning and AI, Oz claimed, it is now possible to scale "good ideas" in an affordable and fast way.
Chang, Trenton, Warrenburg, Lindsay
In many settings, machine learning models may be used to inform decisions that impact individuals or entities who interact with the model. Such entities, or agents, may game model decisions by manipulating their inputs to the model to obtain better outcomes and maximize some utility. We consider a multi-agent setting where the goal is to identify the "worst offenders": the agents that are gaming most aggressively. However, identifying such agents is difficult without being able to evaluate their utility functions. Thus, we introduce a framework featuring a gaming deterrence parameter, a scalar that quantifies an agent's (un)willingness to game. We show that this gaming parameter is only partially identifiable. By recasting the problem as a causal effect estimation problem where different agents represent different "treatments," we prove that a ranking of all agents by their gaming parameters is identifiable. We present empirical results from a synthetic data study validating the use of causal effect estimation for gaming detection, and a case study of diagnosis coding behavior in the U.S. showing that our approach highlights features associated with gaming.
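The core move, treating each agent as a "treatment" and ranking agents by their estimated causal effects on a gameable outcome, can be illustrated on synthetic data. The data-generating process and the regression-adjustment estimator below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch: rank agents by estimated per-agent "treatment" effects
# on an outcome they can game (e.g., coded severity), adjusting for a
# confounder. All names and the data model here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_agents, n_per_agent = 5, 2000

# Hidden ground truth: each agent's gaming aggressiveness.
true_gaming = rng.uniform(0.0, 2.0, size=n_agents)

# Confounder: true patient severity, observed by every agent.
severity = rng.normal(size=n_agents * n_per_agent)
agent = np.repeat(np.arange(n_agents), n_per_agent)

# Observed outcome = true severity + agent-specific upcoding + noise.
coded = severity + true_gaming[agent] + 0.5 * rng.normal(size=severity.size)

# Regression adjustment: one-hot agent indicators plus the confounder.
X = np.column_stack([np.eye(n_agents)[agent], severity])
effects = LinearRegression(fit_intercept=False).fit(X, coded).coef_[:n_agents]

print("true ranking:     ", np.argsort(-true_gaming))
print("estimated ranking:", np.argsort(-effects))
```

Consistent with the abstract's identifiability result, the sketch recovers the ordering of agents rather than any agent's utility function itself.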
Agent-Enhanced Large Language Models for Researching Political Institutions
Loffredo, Joseph R., Yun, Suyeol
The applications of Large Language Models (LLMs) in political science are rapidly expanding. This paper demonstrates how LLMs, when augmented with predefined functions and specialized tools, can serve as dynamic agents capable of streamlining tasks such as data collection, preprocessing, and analysis. Central to this approach is agentic retrieval-augmented generation (Agentic RAG), which equips LLMs with action-calling capabilities for interaction with external knowledge bases. Beyond information retrieval, LLM agents may incorporate modular tools for tasks like document summarization, transcript coding, qualitative variable classification, and statistical modeling. To demonstrate the potential of this approach, we introduce CongressRA, an LLM agent designed to support scholars studying the U.S. Congress. Through this example, we highlight how LLM agents can reduce the costs of replicating, testing, and extending empirical research using the domain-specific data that drives the study of political institutions.
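The agentic RAG pattern the paper describes can be sketched framework-agnostically: the model repeatedly chooses among registered tools until it can answer. In the sketch below, `call_llm`, the tool set, and the CongressRA-style retrieval tool are hypothetical stand-ins, not the authors' code:

```python
# Minimal sketch of an agentic-RAG tool loop, with a toy stand-in LLM.
import json

def search_bills(query: str) -> str:
    """Illustrative retrieval tool over a congressional-bill corpus."""
    return json.dumps([{"bill": "H.R. 1234", "snippet": f"...{query}..."}])

TOOLS = {"search_bills": search_bills}

def call_llm(messages, tools):
    """Toy stand-in for a function-calling LLM: first turn requests
    retrieval, later turns answer from the latest tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_bills",
                              "args": {"query": messages[0]["content"]}}}
    evidence = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": "Answer grounded in: " + evidence}

def agent_loop(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages, tools=list(TOOLS))
        if reply.get("tool_call"):               # model asked for a tool
            name = reply["tool_call"]["name"]
            args = reply["tool_call"]["args"]
            messages.append({"role": "tool", "name": name,
                             "content": TOOLS[name](**args)})
        else:                                    # model produced an answer
            return reply["content"]
    return "No answer within step budget."

print(agent_loop("What does H.R. 1234 change about committee procedure?"))
```

Swapping the toy `call_llm` for a real function-calling model and adding tools for summarization, transcript coding, or statistical modeling gives the modular design the paper advocates.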
Auspex: Building Threat Modeling Tradecraft into an Artificial Intelligence-based Copilot
Crossman, Andrew, Plummer, Andrew R., Sekharudu, Chandra, Warrier, Deepak, Yekrangian, Mohammad
We present Auspex - a threat modeling system built using a specialized collection of generative artificial intelligence-based methods that capture threat modeling tradecraft. This new approach, called tradecraft prompting, centers on encoding the on-the-ground knowledge of threat modelers within the prompts that drive a generative AI-based threat modeling system. Auspex employs tradecraft prompts in two processing stages. The first stage ingests and processes system architecture information using prompts that encode threat modeling tradecraft knowledge pertaining to system decomposition and description. The second stage chains the resulting system analysis through a collection of prompts that encode tradecraft knowledge on threat identification, classification, and mitigation. The two-stage process yields a threat matrix for a system that specifies threat scenarios, threat types, information security categorizations, and potential mitigations. Auspex produces formalized threat model output in minutes, compared with the weeks or months a manual process takes. More broadly, the focus on bespoke tradecraft prompting, as opposed to fine-tuning or agent-based add-ons, makes Auspex a lightweight, flexible, modular, and extensible foundational system capable of addressing the complexity, resource, and standardization limitations of both existing manual and automated threat modeling processes. Accordingly, we establish the baseline value of Auspex to threat modelers through an evaluation in which cybersecurity subject matter experts rated the quality and utility of threat models Auspex generated on real banking systems. We conclude with a discussion of system performance and plans for enhancements to Auspex.
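The two-stage pipeline can be sketched as two chained completions, with the tradecraft encoded in the prompt text itself rather than in fine-tuned weights or agent add-ons. The prompt wording and the `complete()` helper below are assumptions for illustration, not the Auspex implementation:

```python
# Minimal sketch of two-stage "tradecraft prompting": decomposition
# tradecraft in stage 1, threat-identification tradecraft in stage 2.

STAGE1_PROMPT = """You are a threat modeler. Decompose the system below
into components, trust boundaries, and data flows.

System:
{arch}"""

STAGE2_PROMPT = """Given this system decomposition, enumerate threat
scenarios. For each, give: a threat type, an information security
categorization, and a potential mitigation.

Decomposition:
{analysis}"""

def complete(prompt: str) -> str:
    """Stand-in for any text-completion LLM call; echoes a canned
    response so the sketch runs end to end."""
    return f"[model output for prompt of {len(prompt)} chars]"

def build_threat_matrix(architecture_doc: str) -> str:
    # Stage 1: ingest and describe the architecture.
    analysis = complete(STAGE1_PROMPT.format(arch=architecture_doc))
    # Stage 2: chain the analysis into a threat matrix.
    return complete(STAGE2_PROMPT.format(analysis=analysis))

print(build_threat_matrix("payments API -> core banking ledger"))
```

Because the tradecraft lives in editable prompt templates, updating the methodology is a text change rather than a retraining job, which is the lightweight, extensible property the abstract emphasizes.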
The Illusion of Rights-Based AI Regulation
Whether and how to regulate AI is one of the defining questions of our times - a question that is being debated locally, nationally, and internationally. We argue that much of this debate is proceeding on a false premise. Specifically, our article challenges the prevailing academic consensus that the European Union's AI regulatory framework is fundamentally rights-driven and the correlative presumption that other rights-regarding nations should therefore follow Europe's lead in AI regulation. Rather than taking rights language in EU rules and regulations at face value, we show how EU AI regulation is the logical outgrowth of a particular cultural, political, and historical context. We show that although instruments like the General Data Protection Regulation (GDPR) and the AI Act invoke the language of fundamental rights, these rights are instrumentalized - used as rhetorical cover for governance tools that address systemic risks and maintain institutional stability. As such, we reject claims that the EU's regulatory framework and the substance of its rules should be adopted as universal imperatives and transplanted to other liberal democracies. To add weight to our argument from historical context, we conduct a comparative analysis of AI regulation in five contested domains: data privacy, cybersecurity, healthcare, labor, and misinformation. This EU-US comparison shows that the EU's regulatory architecture is not meaningfully rights-based. Our article's key intervention in AI policy debates is not to suggest that the current American regulatory model is necessarily preferable but that the presumed legitimacy of the EU's AI regulatory approach must be abandoned.
ML-Driven Approaches to Combat Medicare Fraud: Advances in Class Imbalance Solutions, Feature Engineering, Adaptive Learning, and Business Impact
Farahmandazad, Dorsa, Danesh, Kasra
Medicare fraud poses a substantial challenge to healthcare systems, resulting in significant financial losses and undermining the quality of care provided to legitimate beneficiaries. This study investigates the use of machine learning (ML) to enhance Medicare fraud detection, addressing key challenges such as class imbalance, high-dimensional data, and evolving fraud patterns. A dataset comprising inpatient claims, outpatient claims, and beneficiary details was used to train and evaluate five ML models: Random Forest, KNN, LDA, Decision Tree, and AdaBoost. Data preprocessing included the SMOTE resampling method to address class imbalance, feature selection for dimensionality reduction, and aggregation of diagnostic and procedural codes. Random Forest emerged as the best-performing model, achieving a training accuracy of 99.2%, a validation accuracy of 98.8%, and an F1-score of 98.4%. The Decision Tree also performed well, achieving a validation accuracy of 96.3%. KNN and AdaBoost demonstrated moderate performance, with validation accuracies of 79.2% and 81.1%, respectively, while LDA struggled with a validation accuracy of 63.3% and a low recall of 16.6%. The results highlight the importance of advanced resampling techniques, feature engineering, and adaptive learning in detecting Medicare fraud effectively. This study underscores the potential of machine learning in addressing the complexities of fraud detection. Future work should explore explainable AI and hybrid models to improve interpretability and performance, ensuring scalable and reliable fraud detection systems that protect healthcare resources and beneficiaries.
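The imbalance-handling pipeline the abstract describes, SMOTE oversampling followed by a Random Forest, might look like the following sketch using scikit-learn and imbalanced-learn. The data here is synthetic; the study itself uses claims and beneficiary features:

```python
# Minimal sketch: SMOTE + Random Forest on an imbalanced binary task.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for claims data: ~2% "fraud" prevalence.
X, y = make_classification(n_samples=20_000, n_features=30,
                           weights=[0.98, 0.02], random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y,
                                            random_state=42)

# Oversample the minority (fraud) class in the training split only,
# so the validation set keeps its real-world imbalance.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_res, y_res)
print("validation F1:", f1_score(y_val, clf.predict(X_val)))
```

Keeping SMOTE out of the validation split matters: oversampling before the split leaks synthetic minority points into evaluation and inflates exactly the accuracy and F1 numbers the study reports.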
Healthcare cost prediction for heterogeneous patient profiles using deep learning models with administrative claims data
Morid, Mohammad Amin, Sheng, Olivia R. Liu
Problem: How can we design patient cost prediction models that effectively address the challenges of heterogeneity in administrative claims (AC) data to ensure accurate, fair, and generalizable predictions, especially for high-need (HN) patients with complex chronic conditions?

Relevance: Accurate and equitable patient cost predictions are vital for developing health management policies and optimizing resource allocation, which can lead to significant cost savings for healthcare payers, including government agencies and private insurers. Addressing disparities in prediction outcomes for HN patients ensures better economic and clinical decision-making, benefiting both patients and payers.

Methodology: This study is grounded in socio-technical considerations that emphasize the interplay between technical systems (e.g., deep learning models) and humanistic outcomes (e.g., fairness in healthcare decisions). It incorporates representation learning and entropy measurement to address heterogeneity and complexity in data and patient profiles, particularly for HN patients. We propose a channel-wise deep learning framework that mitigates data heterogeneity by segmenting AC data into separate channels based on types of codes (e.g., diagnosis, procedures) and costs. This approach is paired with a flexible evaluation design that uses multi-channel entropy measurement to assess patient heterogeneity.

Results: The proposed channel-wise models reduce prediction errors by 23% compared to single-channel models, leading to 16.4% and 19.3% reductions in overpayments and underpayments, respectively. Notably, the reduction in prediction bias is significantly higher for HN patients, demonstrating effectiveness in handling heterogeneity and complexity in data and patient profiles. This demonstrates the potential for applying channel-wise modeling to domains with similar heterogeneity challenges.
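The channel-wise idea, a separate encoder per code or cost type before fusion rather than one flat input vector, can be sketched in a few lines of PyTorch. The channel names and layer sizes below are illustrative, not the paper's architecture:

```python
# Minimal PyTorch sketch of channel-wise cost prediction: each claims
# channel gets its own encoder, and the encodings are fused for a
# single cost estimate.
import torch
import torch.nn as nn

class ChannelWiseCostModel(nn.Module):
    def __init__(self, channel_dims: dict, hidden: int = 64):
        super().__init__()
        # One encoder per channel (e.g., diagnoses, procedures, costs).
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            for name, dim in channel_dims.items()
        })
        self.head = nn.Linear(hidden * len(channel_dims), 1)

    def forward(self, channels: dict) -> torch.Tensor:
        # Encode each channel separately, then concatenate and predict.
        encoded = [self.encoders[name](x) for name, x in channels.items()]
        return self.head(torch.cat(encoded, dim=-1))

model = ChannelWiseCostModel({"diagnosis": 500, "procedure": 300, "cost": 12})
batch = {"diagnosis": torch.randn(8, 500),
         "procedure": torch.randn(8, 300),
         "cost": torch.randn(8, 12)}
print(model(batch).shape)  # torch.Size([8, 1])
```

Segmenting the input this way lets each encoder learn a representation suited to its code type, which is the mechanism the paper credits for the reduced prediction error on heterogeneous patient profiles.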